15 research outputs found

    Deep Learning Techniques for Multi-Dimensional Medical Image Analysis

    Get PDF

    From detection of individual metastases to classification of lymph node status at the patient level: the CAMELYON17 challenge

    Get PDF
    Automated detection of cancer metastases in lymph nodes has the potential to improve the assessment of prognosis for patients. To enable a fair comparison between algorithms for this purpose, we set up the CAMELYON17 challenge in conjunction with the IEEE International Symposium on Biomedical Imaging 2017 conference in Melbourne. Over 300 participants registered on the challenge website, of which 23 teams submitted a total of 37 algorithms before the initial deadline. Participants were provided with 899 whole-slide images (WSIs) for developing their algorithms. The developed algorithms were evaluated on a test set encompassing 100 patients and 500 WSIs. The evaluation metric used was the quadratic weighted Cohen's kappa. We discuss the algorithmic details of the 10 best pre-conference and two post-conference submissions. All of these participants used convolutional neural networks in combination with pre- and postprocessing steps. Algorithms differed mostly in neural network architecture, training strategy, and pre- and postprocessing methodology. Overall, the kappa metric ranged from 0.89 to -0.13 across all submissions. The best results were obtained with pre-trained architectures such as ResNet. Confusion matrix analysis revealed that all participants struggled to reliably identify isolated tumor cells, the smallest type of metastasis, with detection rates below 40%. Qualitative inspection of the results of the top participants revealed categories of false positives, such as nerves or contamination, which could be targeted for further optimization. Last, we show that simple combinations of the top algorithms result in higher kappa values than any algorithm individually, with 0.93 for the best combination.
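
    As a minimal sketch of the evaluation metric described above, the quadratic weighted Cohen's kappa can be computed with scikit-learn over ordinal patient-level stage labels. The five pN stage names follow the abstract's setting; the toy predictions below are illustrative, not challenge data.

```python
# Quadratic weighted Cohen's kappa over ordinal pN stages (toy example).
from sklearn.metrics import cohen_kappa_score

STAGES = ["pN0", "pN0(i+)", "pN1mi", "pN1", "pN2"]  # ordinal categories

y_true = ["pN0", "pN1mi", "pN2", "pN0(i+)", "pN1"]
y_pred = ["pN0", "pN1",   "pN2", "pN0",     "pN1"]

# Map stage names to integer ranks so the quadratic weighting penalizes
# predictions by their squared distance from the true stage.
rank = {s: i for i, s in enumerate(STAGES)}
kappa = cohen_kappa_score(
    [rank[s] for s in y_true],
    [rank[s] for s in y_pred],
    weights="quadratic",
)
print(f"quadratic weighted kappa: {kappa:.3f}")
```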

    Cancer detection in histopathology whole-slide images using conditional random fields on deep embedded spaces

    No full text
    Advanced image analysis can lead to automated examination of histopathology images, which is essential for objective and fast cancer diagnosis. Recently, deep learning methods, in particular convolutional neural networks (CNNs), have shown exceptionally successful performance on medical image analysis as well as computational histopathology. Because whole-slide images (WSIs) are very large, CNN models are commonly applied to classify WSIs per patch. Although a CNN is trained on a large part of the input space, the spatial dependencies between patches are ignored and inference is performed only on the appearance of the individual patches. Therefore, predictions on neighboring regions can be inconsistent. In this paper, we apply conditional random fields (CRFs) over latent spaces of a trained deep CNN in order to jointly assign labels to the patches. In our approach, compact features extracted from intermediate layers of a CNN are treated as observations in a fully-connected CRF model. This leads to performing inference on a wider context rather than on the appearance of individual patches alone. Experiments show an improvement of approximately 3.9% in average FROC score for tumorous region detection in histopathology WSIs. Our proposed model, trained on the CAMELYON17 ISBI challenge dataset, won 2nd place, with a kappa score of 0.8759, in patient-level pathologic lymph node classification for breast cancer detection.
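
    The core idea above is that per-patch CNN embeddings serve as CRF observations, so tumor labels are assigned jointly rather than patch by patch. The paper uses a fully-connected CRF; as a simplified, hedged sketch, the snippet below runs mean-field-style updates over a patch grid with a Gaussian kernel on embedding distance between neighbors. All names and the grid-neighbor restriction are illustrative assumptions, not the authors' implementation.

```python
# Simplified mean-field smoothing of patch-level probabilities, where an
# appearance kernel on CNN embeddings couples neighboring patches.
import numpy as np

def meanfield_smooth(unary, emb, iters=5, bw=1.0, w=0.5):
    """unary: (H, W, C) per-patch class probabilities from the CNN.
    emb:   (H, W, D) per-patch embeddings from an intermediate layer."""
    q = unary.copy()
    for _ in range(iters):
        msg = np.zeros_like(q)
        for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            src = np.roll(q, (dy, dx), axis=(0, 1))
            nbr = np.roll(emb, (dy, dx), axis=(0, 1))
            # appearance kernel: similar embeddings -> stronger agreement
            k = np.exp(-np.sum((emb - nbr) ** 2, axis=-1) / (2 * bw ** 2))
            msg += k[..., None] * src
        q = unary * np.exp(w * msg)          # combine unary and pairwise terms
        q /= q.sum(axis=-1, keepdims=True)   # renormalize to probabilities
    return q
```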

    Binon (Stéphane). Essai sur le cycle de saint Mercure, martyr de Dèce et meurtrier de l'empereur Julien

    Get PDF
    De Moreau, E. (S.J.). Review of: Binon (Stéphane), Essai sur le cycle de saint Mercure, martyr de Dèce et meurtrier de l'empereur Julien. In: Revue belge de philologie et d'histoire, vol. 18, fasc. 2-3, 1939, pp. 560-563.

    Histopathology stain-color normalization using deep generative models

    No full text
    The performance of CAD algorithms for histopathology image analysis is affected by variations in the samples, such as the color and intensity of stained images. Stain-color normalization is a well-studied technique for compensating for such effects at the input of CAD systems. In this paper, we introduce unsupervised generative neural networks for performing stain-color normalization. For color normalization of stained hematoxylin and eosin (H&E) images, we present three methods based on three frameworks for deep generative models: variational auto-encoders (VAE), generative adversarial networks (GAN), and deep convolutional Gaussian mixture models (DCGMM). Our contribution is to define color normalization as learning a generative model that is able to generate various color copies of the input image through a nonlinear parametric transformation. In contrast to earlier generative models proposed for stain-color normalization, our approach does not need any labels for the data or any other assumptions about the H&E image content. Furthermore, our models learn a parametric transformation during training and can convert the color information of an input image to resemble any arbitrary reference image. This property is essential in time-critical CAD systems in case the reference image changes, since our approach, in contrast to other proposed generative models for stain-color normalization, does not need retraining. Experiments on histopathological H&E images with high staining variations, collected from different laboratories, show that our proposed models quantitatively outperform state-of-the-art methods in the measure of color constancy by at least 10-15%, while the converted images are visually in agreement with this performance improvement.
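
    The paper's DCGMM learns a Gaussian mixture over stain colors with a deep network. As a hedged, non-deep stand-in that illustrates the mixture-based recoloring idea, the sketch below fits classic GMMs to the pixel colors of a source and a reference slide and maps each source pixel's component toward the matched reference component. Function and variable names are illustrative, not from the paper.

```python
# GMM-based color transfer toward a reference image (illustrative stand-in
# for the deep generative normalization described in the abstract).
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_color_transfer(src, ref, n_components=2, seed=0):
    """src, ref: float arrays of shape (H, W, 3) with values in [0, 1]."""
    s, r = src.reshape(-1, 3), ref.reshape(-1, 3)
    gs = GaussianMixture(n_components, random_state=seed).fit(s)
    gr = GaussianMixture(n_components, random_state=seed).fit(r)
    # crude component matching: sort components by mean luminance
    order_s = np.argsort(gs.means_.mean(axis=1))
    order_r = np.argsort(gr.means_.mean(axis=1))
    comp = gs.predict(s)
    out = s.copy()
    for ks, kr in zip(order_s, order_r):
        # shift/scale each component's pixels to the reference statistics
        std_s = np.sqrt(np.diagonal(gs.covariances_[ks]))
        std_r = np.sqrt(np.diagonal(gr.covariances_[kr]))
        m = comp == ks
        out[m] = (s[m] - gs.means_[ks]) / std_s * std_r + gr.means_[kr]
    return np.clip(out, 0, 1).reshape(src.shape)
```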

    Localization of partially visible needles in 3D ultrasound using dilated CNNs

    No full text
    Needle guidance for interventions that involve percutaneously advancing a needle to a target inside the patient's body is one of the key uses of ultrasound, for example in biopsies, ablations, and nerve blocks. During these procedures, image-based detection of the needle can circumvent complex needle-transducer alignment by ensuring adequate visualization of the needle during the entire procedure. However, successful localization with sector and curvilinear transducers is challenging, as the needle can be invisible or only partially visible due to the lack of received beam reflections from parts of the needle. It is therefore necessary to explicitly model the global information present in the data to compensate for the lost signal and localize the needle correctly. We present a novel image-based localization technique to detect partially visible needles in phased-array 3D ultrasound volumes using dilated convolutional neural networks. The proposed algorithm successfully detects the needle plane with submillimeter accuracy in the 20 measured datasets, which also include cases in which the needle shaft is mostly invisible.
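
    A minimal sketch of the architectural idea named above: stacking convolutions with exponentially increasing dilation rates gives each output location a wide receptive field, which is what allows reasoning about a needle whose shaft is only partially visible. Shown in 2D for brevity (the paper works on 3D volumes); layer counts and channel widths are illustrative assumptions.

```python
# Small dilated CNN producing a per-pixel needle logit map.
import torch
import torch.nn as nn

class DilatedNeedleNet(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        layers, in_ch = [], 1
        for d in (1, 2, 4, 8, 16):  # receptive field grows exponentially
            layers += [nn.Conv2d(in_ch, ch, 3, padding=d, dilation=d),
                       nn.BatchNorm2d(ch), nn.ReLU(inplace=True)]
            in_ch = ch
        self.body = nn.Sequential(*layers)
        self.head = nn.Conv2d(ch, 1, 1)  # per-pixel needle logit

    def forward(self, x):
        return self.head(self.body(x))

net = DilatedNeedleNet()
logits = net(torch.randn(1, 1, 128, 128))  # one grayscale ultrasound slice
print(logits.shape)  # torch.Size([1, 1, 128, 128])
```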

    Optimize transfer learning for lung diseases in bronchoscopy using a new concept: sequential fine-tuning

    No full text
    Bronchoscopy inspection, as a follow-up procedure to radiological imaging, plays a key role in the diagnosis and treatment design for lung disease patients. When performing bronchoscopy, doctors have to decide immediately whether to perform a biopsy. Because biopsies may cause uncontrollable and life-threatening bleeding of the lung tissue, doctors need to be selective with biopsies. In this paper, to help doctors be more selective with biopsies and to provide a second opinion on diagnosis, we propose a computer-aided diagnosis (CAD) system for lung diseases, including cancers and tuberculosis (TB). Based on transfer learning (TL), we propose a novel TL method on top of DenseNet: sequential fine-tuning (SFT). Compared with traditional fine-tuning (FT) methods, our method achieves the best performance. On a dataset of 81 normal cases, 76 TB cases, and 277 lung cancer cases, SFT provided an overall accuracy of 82%, while other traditional TL methods achieved accuracies from 70% to 74%. The detection accuracies of SFT for cancers, TB, and normal cases are 87%, 54%, and 91%, respectively. This indicates that the CAD system has the potential to improve the accuracy of lung disease diagnosis in bronchoscopy, and it may help doctors be more selective with biopsies.
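
    The exact SFT schedule is the paper's own contribution; as a hedged sketch of the general idea it builds on, the snippet below progressively unfreezes a pretrained DenseNet from the classifier backwards, one stage per step, so earlier stages are only adapted after later ones. The three-class head (normal, TB, cancer) follows the abstract; the stage grouping is an illustrative assumption.

```python
# Stage-wise (sequential) unfreezing of a pretrained DenseNet-121.
import torch.nn as nn
from torchvision.models import densenet121

model = densenet121(weights="DEFAULT")
model.classifier = nn.Linear(model.classifier.in_features, 3)

# stages from the head backwards: classifier first, then deeper blocks
stages = [model.classifier,
          model.features.denseblock4,
          model.features.denseblock3]

for p in model.parameters():
    p.requires_grad = False

for stage in stages:
    for p in stage.parameters():
        p.requires_grad = True
    # ... train for a few epochs here before unfreezing the next stage ...
```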

    Lesion Segmentation in Ultrasound Using Semi-pixel-wise Cycle Generative Adversarial Nets

    No full text
    Breast cancer is the most common invasive cancer and has the highest cancer incidence in females. Handheld ultrasound is one of the most efficient ways to identify and diagnose breast cancer. The area and shape information of a lesion is very helpful for clinicians in making diagnostic decisions. In this study, we propose a new deep-learning scheme, the semi-pixel-wise cycle generative adversarial net (SPCGAN), for segmenting lesions in 2D ultrasound. The method takes advantage of a fully convolutional neural network (FCN) and a generative adversarial net to segment a lesion using prior knowledge. We compared the proposed method to an FCN and the level-set segmentation method on a test dataset consisting of 32 malignant lesions and 109 benign lesions. Our proposed method achieved a Dice similarity coefficient (DSC) of 0.92, while the FCN and the level set achieved 0.90 and 0.79, respectively. In particular, for malignant lesions, our method significantly increases the DSC of the FCN from 0.90 to 0.93 (p < 0.001). The results show that our SPCGAN obtains robust segmentation results. Compared to the FCN, the SPCGAN framework is particularly effective when sufficient training samples are not available. Our proposed method may be used to relieve radiologists' annotation burden.
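
    The DSC reported above is a standard overlap measure between a predicted and a ground-truth mask; a minimal sketch for binary masks, independent of the SPCGAN model itself, is shown below.

```python
# Dice similarity coefficient between two binary segmentation masks.
import numpy as np

def dice(pred, truth, eps=1e-7):
    """pred, truth: arrays of 0/1 of the same shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)
```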

    Robust and semantic needle detection in 3D ultrasound using orthogonal-plane convolutional neural networks

    Get PDF
    Purpose: During needle interventions, successful automated detection of the needle immediately after insertion is necessary to allow the physician to identify and correct any misalignment between the needle and the target at an early stage, which reduces needle passes and improves health outcomes. Methods: We present a novel approach to localize partially inserted needles in 3D ultrasound volumes with high precision using convolutional neural networks. We propose two methods based on patch classification and semantic segmentation of the needle from orthogonal 2D cross-sections extracted from the volume. For patch classification, each voxel is classified from locally extracted raw data of three orthogonal planes centered on it. We propose a bootstrap resampling approach to enhance training on our highly imbalanced data. For semantic segmentation, parts of the needle are detected in cross-sections perpendicular to the lateral and elevational axes. We propose to exploit the structural information in the data with a novel thick-slice processing approach for efficient modeling of the context. Results: Our methods successfully detect 17G and 22G needles with a single trained network, demonstrating a robust, generalized approach. Extensive ex-vivo evaluations on chicken breast and porcine leg datasets show F1-scores of 80% and 84%, respectively. Furthermore, very short needles are detected with tip localization errors of less than 0.7 mm for lengths of only 5 and 10 mm at 0.2 and 0.36 mm voxel sizes, respectively. Conclusion: Our method is able to accurately detect even very short needles, ensuring that the needle and its tip are maximally visible in the visualized plane during the entire intervention, thereby eliminating the need for advanced bimanual coordination of the needle and transducer.
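
    A minimal sketch of the patch-classification input described above: for a candidate voxel, extract the three orthogonal 2D patches centered on it from the 3D ultrasound volume. The patch size and edge-padding policy are illustrative assumptions, not the paper's exact preprocessing.

```python
# Extract three orthogonal 2D patches around a voxel of a 3D volume.
import numpy as np

def orthogonal_patches(vol, z, y, x, half=16):
    """vol: 3D array; returns three (2*half, 2*half) patches around (z, y, x)."""
    pad = np.pad(vol, half, mode="edge")          # avoid border underflow
    z, y, x = z + half, y + half, x + half        # shift into padded coords
    return (pad[z, y-half:y+half, x-half:x+half],  # plane normal to axis 0
            pad[z-half:z+half, y, x-half:x+half],  # plane normal to axis 1
            pad[z-half:z+half, y-half:y+half, x])  # plane normal to axis 2
```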